Maximum-Entropy Exploration with Future State-Action Visitation Measures
Adrien Bolland, Gaspard Lambrechts, Damien Ernst
Maximum entropy reinforcement learning motivates agents to explore states and actions to maximize the entropy of some distribution, typically by providing additional intrinsic rewards proportional to that entropy function. In this paper, we study intrinsic rewards proportional to the entropy of the discounted distribution of state-action features visited during future time steps. This approach is motivated by two results. First, we show that the expected sum of these intrinsic rewards is a lower bound on the entropy of the discounted distribution of state-action features visited in trajectories starting from the initial states, which we relate to an alternative maximum entropy objective. Second, we show that the distribution used in the intrinsic reward definition is the fixed point of a contraction operator and can therefore be estimated off-policy. Experiments highlight that the new objective leads to improved visitation of features within individual trajectories, in exchange for slightly reduced visitation of features in expectation over different trajectories, as suggested by the lower bound. It also leads to improved convergence speed for learning exploration-only agents. Control performance remains similar across most methods on the considered benchmarks.
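As a rough illustration of the quantities described in the abstract, here is a sketch in notation of our own choosing (the normalization, the conditioning, and the constant C_gamma are assumptions made for readability, not the paper's exact statements):

```latex
% Illustrative notation only: definitions, normalizations, and the constant
% C_\gamma are our own assumptions, not the paper's exact statements.
% \phi(s,a): state-action features, \gamma: discount factor, \mathcal{H}: entropy.
\begin{align}
  % Discounted distribution of features visited from time step t onward:
  d^{\pi}_{t}(x) &= (1-\gamma) \sum_{k \ge t} \gamma^{\,k-t}\,
      \Pr\!\big(\phi(s_k, a_k) = x \,\big|\, s_t, a_t;\, \pi\big) \\
  % Intrinsic reward: entropy of that future visitation distribution.
  r^{\mathrm{int}}_{t} &\propto \mathcal{H}\big(d^{\pi}_{t}\big) \\
  % Lower-bound relation from the abstract: the expected discounted sum of
  % intrinsic rewards lower-bounds the trajectory-level feature entropy.
  \mathbb{E}_{\pi}\Big[\sum_{t \ge 0} \gamma^{t}\, r^{\mathrm{int}}_{t}\Big]
      &\le C_{\gamma}\, \mathcal{H}\big(d^{\pi}_{0}\big) \\
  % Fixed-point (Bellman-like) recursion, which is what allows the distribution
  % to be estimated off-policy:
  d^{\pi}_{t} &= (1-\gamma)\,\delta_{\phi(s_t, a_t)}
      + \gamma\, \mathbb{E}\big[d^{\pi}_{t+1} \,\big|\, s_t, a_t;\, \pi\big]
\end{align}
```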
A Algorithms

We directly adopt the official default settings for Atari games.

B.2 Minecraft Environment Settings

Table 1 outlines how we set up and initialize the environment for each harvest task. Our method is tested in two different biomes: plains and sunflower plains, both of which offer a wider field of view. In Minecraft, the action space is an 8-dimensional multi-discrete space.
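For concreteness, here is a minimal sketch of how such an action space could be declared with Gymnasium; the per-dimension cardinalities below are placeholders of our own, not the values used in this benchmark:

```python
# Minimal sketch of an 8-dimensional multi-discrete action space.
# The per-dimension sizes are illustrative placeholders, not the actual
# cardinalities of the Minecraft benchmark described above.
from gymnasium.spaces import MultiDiscrete

action_space = MultiDiscrete([3, 3, 4, 5, 5, 8, 2, 2])  # 8 independent discrete sub-actions

action = action_space.sample()        # e.g. array([1, 0, 2, 3, 4, 7, 0, 1])
assert action_space.contains(action)  # each sub-action lies in its own range
```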
Discovering Creative Behaviors through DUPLEX: Diverse Universal Features for Policy Exploration
The ability to approach the same problem from different angles is a cornerstone of human intelligence that leads to robust solutions and effective adaptation to problem variations. In contrast, current RL methodologies tend to lead to policies that settle on a single solution to a given problem, making them brittle to problem variations. Replicating human flexibility in reinforcement learning agents is the challenge that we explore in this work.
Figure (caption fragment): left, prediction error; right, surprise. α is a hyperparameter we scanned over.

We thank the reviewers for their thorough feedback. Based on it, we have made numerous improvements. We implemented a new IM baseline, ICM (Pathak 2017 [23]); the original code is for discrete actions. We also added an IM baseline with the random object; the plot is similar to "tool" in Figure 1 and we omit it due to space constraints. Rev. #1 suggested that the environments could be solved by classic planning methods.
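As context for the ICM baseline mentioned above, here is a minimal sketch of the forward-model prediction-error bonus that ICM-style methods use; the architecture sizes and the way continuous actions are handled are our own placeholder choices, not the baseline's actual implementation:

```python
# Minimal ICM-style sketch: the intrinsic reward is the forward-model
# prediction error in a learned feature space (the inverse-model loss that
# full ICM also trains is omitted here).
import torch
import torch.nn as nn


class ICMSketch(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, feat_dim: int = 64):
        super().__init__()
        # Feature encoder phi(s).
        self.encoder = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU())
        # Forward model: predicts phi(s') from (phi(s), a). Concatenating the
        # raw continuous action is one simple way to adapt code originally
        # written for discrete (one-hot) actions.
        self.forward_model = nn.Sequential(
            nn.Linear(feat_dim + act_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def intrinsic_reward(self, obs, action, next_obs):
        phi, phi_next = self.encoder(obs), self.encoder(next_obs)
        phi_pred = self.forward_model(torch.cat([phi, action], dim=-1))
        # Per-sample squared prediction error, used as the curiosity bonus.
        return 0.5 * (phi_pred - phi_next).pow(2).sum(dim=-1)
```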